- Energy > Oil & Gas (0.68)
- Health & Medicine > Therapeutic Area > Neurology (0.47)
Incredibly smart or incredibly stupid? What we learned from using ChatGPT for a year
Next month ChatGPT will celebrate its first birthday – marking a year in which the chatbot, for many, turned AI from a futuristic concept to a daily reality. Its universal accessibility has led to a host of concerns, from job losses to disinformation to plagiarism. Over the same period, tens of millions of users have been investigating what the platform can do to make their lives just a little bit easier. Upon its release, users quickly embraced ChatGPT's potential for silliness, asking it to play 20 questions or write its own songs. As its first anniversary approaches, people are using it for a huge range of tasks.
- Media > News (0.35)
- Banking & Finance (0.35)
- Health & Medicine (0.31)
- Law (0.30)
'Dr. Google' meets its match: Dr. ChatGPT
As a fourth-year ophthalmology resident at Emory University School of Medicine, Dr. Riley Lyons' biggest responsibilities include triage: When a patient comes in with an eye-related complaint, Lyons must make an immediate assessment of its urgency. He often finds patients have already turned to "Dr. Google." Online, Lyons said, they are likely to find that "any number of terrible things could be going on based on the symptoms that they're experiencing." So, when two of Lyons' fellow ophthalmologists at Emory came to him and suggested evaluating the accuracy of the AI chatbot ChatGPT in diagnosing eye-related complaints, he jumped at the chance. In June, Lyons and his colleagues reported in medRxiv, an online publisher of preliminary health science studies, that ChatGPT compared quite well to human doctors who reviewed the same symptoms -- and performed vastly better than the symptom checker on the popular health website WebMD. And despite the much-publicized "hallucination" problem known to ...
- North America > Canada (0.30)
- North America > United States > California (0.16)
- Health & Medicine > Consumer Health (0.70)
- Health & Medicine > Therapeutic Area > Ophthalmology/Optometry (0.55)
Deepfake detection tools must work with dark skin tones, experts warn
Detection tools being developed to combat the growing threat of deepfakes – realistic-looking false content – must use training datasets that are inclusive of darker skin tones to avoid bias, experts have warned. Most deepfake detectors are based on a learning strategy that depends largely on the dataset used to train them. The detector then uses AI to spot signs that may not be clear to the human eye, which can include monitoring blood flow and heart rate. However, these detection methods do not always work on people with darker skin tones, and if training sets do not contain all ethnicities, accents, genders, ages and skin tones, they are open to bias, experts warned.
New model reduces bias and enhances trust in AI decision-making and knowledge organization
Traditional machine learning models often yield biased results, favouring groups with large populations or being influenced by unknown factors; identifying these biases takes extensive effort, since they arise from instances containing patterns and sub-patterns drawn from different classes or primary sources. The medical field is one area where biased machine learning results have severe implications. Hospital staff and medical professionals rely on datasets containing thousands of medical records, and on complex computer algorithms, to make critical decisions about patient care. Machine learning is used to sort the data, which saves time. However, specific patient groups with rare symptomatic patterns may go undetected, and mislabeled patients and anomalies could impact diagnostic outcomes.
A Mystery in the E.R.? Ask Dr. Chatbot for a Diagnosis.
Artificial intelligence is transforming many aspects of the practice of medicine, and some medical professionals are using these tools to help them with diagnosis. Doctors at Beth Israel Deaconess, a teaching hospital affiliated with Harvard Medical School, decided to explore how chatbots could be used -- and misused -- in training future doctors. Instructors like Dr. Rodman hope that medical students can turn to GPT-4 and other chatbots for something similar to what doctors call a curbside consult -- when they pull a colleague aside and ask for an opinion about a difficult case. The idea is to use a chatbot in the same way that doctors turn to each other for suggestions and insights. For more than a century, doctors have been portrayed like detectives who gather clues and use them to find the culprit.
Tiny robot could stop bleeding from inside the body using heat
A small robot that can shape-shift and produce heat could incinerate cancer cells or stop bleeding from inside the body. It could also be used to ferry drugs directly to tumours or hard-to-reach places like arteries. Tiny robots with soft bodies have shown promise for delivering drugs without causing damage – but adding hard elements could make them more useful. Ren Hao Soon at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, and his colleagues designed the centimetre-sized robot to have overlapping aluminium plates inspired by pangolins, the only mammal with scales. They layered rectangular "scales" over softer, magnetic material, which let the robot change its shape.
Advanced universal control system may revolutionize lower limb exoskeleton control and optimize user experience
While advances in wearable robotics have helped restore mobility for people with lower limb impairments, current control methods for exoskeletons are limited in their ability to provide natural and intuitive movements for users. This can compromise balance and contribute to user fatigue and discomfort. Few studies have focused on the development of robust controllers that can optimize the user's experience in terms of safety and independence. Existing exoskeletons for lower limb rehabilitation employ a variety of technologies to help the user maintain balance, including special crutches and sensors, according to co-author Ghaith Androwis, PhD, senior research scientist in the Center for Mobility and Rehabilitation Engineering Research at Kessler Foundation and director of the Center's Rehabilitation Robotics and Research Laboratory. Exoskeletons that operate without such helpers allow more independent walking, but at the cost of added weight and slower walking speeds.
The existential threat from AI – and from humans misusing it | Letters
Regarding Jonathan Freedland's article about AI (The future of AI is chilling – humans have to act together to overcome this threat to civilisation, 26 May), isn't worrying about whether an AI is "sentient" rather like worrying whether a prosthetic limb is "alive"? There isn't even any evidence that "sentience" is a thing. More likely, like life, it is a bunch of distinct capabilities interacting, and "AI" (ie disembodied artificial intellect) is unlikely to reproduce more than a couple of those capabilities. That's because it is an attempt to reproduce the function of just a small part of the human brain: more particularly, of the evolutionarily new part. Our motivation to pursue self-interest comes from a billion years of evolution of the old brain, which AI is not based upon.